# RoBERTa Fine-tuning
- **KAILAS** (adsabs) · Apache-2.0 · Sequence Labeling · English · 51 downloads · 1 like
  A Transformer model based on the RoBERTa architecture, adapted for NASA's Science Mission Directorate to annotate Unified Astronomy Thesaurus labels.
- **Roberta Base Ai Text Detection V1** (fakespot-ai) · Apache-2.0 · Text Classification · Transformers · English · 574 downloads · 1 like
  A RoBERTa-base model fine-tuned to detect AI-generated English text.
- **Deid Roberta I2b2 NER Medical Reports** (theekshana) · MIT · Sequence Labeling · Transformers · 28 downloads · 1 like
  A fine-tuned version of obi/deid_roberta_i2b2 (training dataset unspecified), used for processing medical-report text.
- **Roberta Chinese Med Inquiry Intention Recognition Base** (HZhun) · Apache-2.0 · Text Classification · Chinese · 27 downloads · 2 likes
  A RoBERTa-based text classifier that recognizes user intent in Chinese input, distinguishing medical-consultation queries from casual conversation.
- **Roberta Chinese Med Inquiry Intention Recognition Base** (StevenZhun) · Apache-2.0 · Text Classification · Chinese · 23 downloads · 1 like
  A RoBERTa-based text classifier that identifies whether Chinese user input belongs to the medical-consultation or casual-conversation category.
- **Toxic Prompt Roberta** (Intel) · MIT · Text Classification · Transformers · 416 downloads · 7 likes
  A RoBERTa-based text classifier for detecting toxic prompts and responses in dialogue systems.
- **Roberta Base Legal Multi Downstream Indian Ner** (MHGanainy) · Sequence Labeling · Transformers · 66 downloads · 2 likes
  A RoBERTa model pre-trained on multilingual legal text and fine-tuned for Indian legal named entity recognition.
- **Emotion RoBERTa German6 V7** (visegradmedia-emotion) · MIT · Text Classification · Transformers · German · 355 downloads · 3 likes
  A German emotion classifier fine-tuned from RoBERTa; it assigns text one of six labels: anger, fear, disgust, sadness, happiness, or none of the above.
- **Topic Govt Regulation** (dell-research-harvard) · Text Classification · Transformers · English · 15 downloads · 2 likes
  A RoBERTa-large text classifier that determines whether a news article deals with government regulation.
- **Roberta Base Biomedical Clinical Es Ner** (manucos) · Apache-2.0 · Sequence Labeling · Transformers · 25 downloads · 1 like
  A fine-tuned version of BSC-LT/roberta-base-biomedical-clinical-es for named entity recognition (NER) on Spanish biomedical clinical text.
- **Fictoroberta** (bekushal) · Text Classification · Transformers · 75 downloads · 1 like
  A style classifier fine-tuned from RoBERTa-base that distinguishes non-fiction narrative style from fiction descriptive style.
- **Nuner V1 Orgs** (guishe) · Sequence Labeling · Transformers · Multilingual · 6,836 downloads · 2 likes
  A model fine-tuned from numind/NuNER-v1.0 on FewNERD-fine-supervised to recognize organization (ORG) entities in text.
- **Roberta Large Llm Content Detector** (SuperAnnotate) · Other license · Text Classification · Transformers · English · 21.51k downloads · 7 likes
  A fine-tuned RoBERTa-large model for detecting AI-generated content.
- **Roberta Base Topic Classification Nyt News** (dstefa) · MIT · Text Classification · Transformers · 14.09k downloads · 9 likes
  A news topic classifier fine-tuned from roberta-base on a New York Times news dataset; reported accuracy 0.91.
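Most classifiers in this list can be tried in a few lines with the `transformers` pipeline API. A minimal sketch for the NYT topic model above; the repo id is an assumption inferred from the author and model name, so confirm the exact id on the Hub before use:

```python
from transformers import pipeline

# Assumed repo id (author "dstefa" + model name above); verify on the Hub.
MODEL_ID = "dstefa/roberta-base_topic_classification_nyt_news"

def classify_headlines(headlines, model_id=MODEL_ID):
    """Label a batch of headlines with the fine-tuned topic classifier."""
    # top_k=1 keeps only the highest-scoring label per input
    clf = pipeline("text-classification", model=model_id, top_k=1)
    return clf(headlines)
```

Each result is a `{"label": ..., "score": ...}` dict, so downstream code can threshold on `score` before trusting a topic assignment.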
- **Scamllm** (phishbot) · Text Classification · Transformers · 294 downloads · 7 likes
  A RoBERTa-based detector for malicious prompts, such as prompts that solicit phishing websites or emails.
- **Historical Newspaper Ner** (dell-research-harvard) · Sequence Labeling · Transformers · English · 209 downloads · 8 likes
  A named entity recognition model fine-tuned from RoBERTa-large for historical newspaper text, which may contain OCR errors.
- **Centralbankroberta Sentiment Classifier** (Moritz-Pfeifer) · MIT · Large Language Model · Transformers · English · 7,351 downloads · 5 likes
  A fine-tuned model optimized for central-bank communications, comprising an economic entity classifier and a sentiment classifier.
- **FOMC RoBERTa** (gtfintechlab) · Text Classification · Transformers · English · 654 downloads · 9 likes
  A fine-tuned RoBERTa model that classifies FOMC communications as hawkish, dovish, or neutral.
- **Roberta Gen News** (AndyReas) · MIT · Large Language Model · Transformers · English · 17 downloads · 1 like
  A masked-word ("gap-filling") prediction model fine-tuned from roberta-base on roughly 13 million English news articles.
- **Deproberta Large Depression** (rafalposwiata) · Text Classification · Transformers · English · 987 downloads · 7 likes
  A depression-detection model fine-tuned from RoBERTa-large that rates English social-media posts as not depressed, moderate, or severe.
- **Roberta Qa Japanese** (tsmatz) · MIT · Question Answering · Transformers · Japanese · 104 downloads · 9 likes
  A Japanese extractive question-answering model fine-tuned from rinna/japanese-roberta-base on the JaQuAD dataset.
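Extractive QA models such as the one above take a question plus a context passage and return a span copied from the context. A minimal sketch; the repo id is an assumption inferred from the author name, so check the Hub for the exact id:

```python
from transformers import pipeline

# Assumed repo id (author "tsmatz" + model name above); verify on the Hub.
QA_MODEL = "tsmatz/roberta_qa_japanese"

def answer(question, context, model_id=QA_MODEL):
    """Extract the answer span for `question` out of `context`."""
    # The pipeline returns {"answer": ..., "score": ..., "start": ..., "end": ...}
    qa = pipeline("question-answering", model=model_id)
    return qa(question=question, context=context)
```

Because the answer is always a substring of `context`, `start`/`end` offsets can be used to highlight it in the original passage.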
- **Deproberta Large V1** (rafalposwiata) · Large Language Model · Transformers · English · 17 downloads · 1 like
  A RoBERTa-large language model further pre-trained on depression-related Reddit posts, intended as a base for depression detection.
- **Roberta Base Squad** (DLL888) · MIT · Question Answering · Transformers · 14 downloads · 0 likes
  A question-answering model fine-tuned from roberta-base on SQuAD-format datasets.
- **Roberta Base Mnli Uf Ner 1024 Train V0** (mariolinml) · MIT · Large Language Model · Transformers · 26 downloads · 1 like
  A RoBERTa-base model fine-tuned on the MNLI dataset, suitable for natural language inference tasks.
- **Roberta Large Semeval2012 V2 Mask Prompt A Nce** (research-backup) · Text Embedding · Transformers · 16 downloads · 0 likes
  RelBERT: a semantic-relation model fine-tuned from RoBERTa-large for lexical relation classification and analogy questions.
- **Legal Roberta Base Cuad** (alex-apostolo) · Apache-2.0 · Large Language Model · Transformers · 165 downloads · 0 likes
  A RoBERTa-based legal text model fine-tuned on the CUAD contract-understanding dataset.
- **Nf Cats** (Lurunchik) · MIT · Text Classification · Transformers · English · 245 downloads · 5 likes
  A RoBERTa-based classifier that assigns categories to non-factoid questions.
- **Roberta Base Tweetner7 All** (tner) · Sequence Labeling · Transformers · 30 downloads · 0 likes
  A named entity recognition model fine-tuned from roberta-base on the tweetner7 dataset, for entities in Twitter text.
- **Xlm Roberta Large Ner Hrl Finetuned Ner** (kinanmartin) · Sequence Labeling · Transformers · 29 downloads · 0 likes
  A named entity recognition model fine-tuned from xlm-roberta-large-ner-hrl on a toy dataset.
- **Xlm Roberta Base Finetuned Panx All** (huangjia) · MIT · Large Language Model · Transformers · 29 downloads · 0 likes
  An XLM-RoBERTa-base model fine-tuned for sequence labeling; reported F1 score 0.8561.
- **Roberta Large Tweetner7 All** (tner) · Sequence Labeling · Transformers · 170.06k downloads · 1 like
  A named entity recognition model fine-tuned from roberta-large on the tner/tweetner7 dataset, for entities in Twitter text.
- **Twitter Roberta Base WNUT** (emilys) · Sequence Labeling · Transformers · 16 downloads · 0 likes
  A named entity recognition model fine-tuned from Twitter RoBERTa on the WNUT 17 dataset.
- **Roberta Base Ner Demo** (Buyandelger) · Sequence Labeling · Transformers · Other · 15 downloads · 0 likes
  A named entity recognition demo model fine-tuned from a Mongolian RoBERTa-base.
- **Roberta Base Squad2 Finetuned Squad** (ms12345) · Question Answering · Transformers · 14 downloads · 0 likes
  A fine-tuned version of deepset/roberta-base-squad2 for question-answering tasks.
- **Roberta Base Biomedical Clinical Es Finetuned Ner** (asdc) · Apache-2.0 · Sequence Labeling · Transformers · 15 downloads · 0 likes
  A Spanish biomedical clinical NER model fine-tuned from PlanTL-GOB-ES/roberta-base-biomedical-clinical-es.
- **Bert Finetuned Protagonist English** (airi) · Sequence Labeling · Transformers · 15 downloads · 0 likes
  A model fine-tuned from roberta-large-ner-english for protagonist identification.
- **Roberta Base Finetuned Squad 2** (huxxx657) · MIT · Question Answering · Transformers · 15 downloads · 0 likes
  A question-answering model fine-tuned from RoBERTa-base on the SQuAD dataset.
- **Nbme Roberta Large** (smeoni) · MIT · Large Language Model · Transformers · 35 downloads · 0 likes
  A roberta-large model fine-tuned for a specific task; reported evaluation loss 0.7825.
- **Mex Rbta Opinion Polarity** (javilonso) · Apache-2.0 · Text Classification · Transformers · 26 downloads · 1 like
  A Spanish sentiment-polarity classifier fine-tuned from PlanTL-GOB-ES/roberta-base-bne.
- **Bsc Bio Ehr Es Pharmaconer** (PlanTL-GOB-ES) · Apache-2.0 · Sequence Labeling · Transformers · Spanish · 250 downloads · 2 likes
  A Spanish biomedical RoBERTa model fine-tuned for named entity recognition on the PharmaCoNER dataset.
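The sequence-labeling entries above are all token-classification models and share one loading pattern. A minimal sketch using the tweetner7 model; the repo id is an assumption inferred from the author "tner" and the model name, so confirm it on the Hub:

```python
from transformers import pipeline

# Assumed repo id (author "tner" + model name above); verify on the Hub.
NER_MODEL = "tner/roberta-large-tweetner7-all"

def extract_entities(text, model_id=NER_MODEL):
    """Return the named entities found in `text`."""
    # aggregation_strategy="simple" merges word-piece tokens back into whole entities
    tagger = pipeline(
        "token-classification", model=model_id, aggregation_strategy="simple"
    )
    return tagger(text)
```

Each returned entity carries `entity_group`, `word`, `score`, and character offsets, which is usually enough to reconstruct annotated text without further post-processing.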